
A Recurrent Neural Circuit Mechanism of Temporal-scaling Equivariant Representation

Neural Information Processing Systems

Time perception is fundamental in our daily life. An important feature of time perception is temporal scaling (TS): the ability to generate temporal sequences (e.g., movements) at different speeds. However, the mathematical principle underlying TS in the brain remains largely unknown.
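Although the abstract stops short of a formal definition, the equivariance property in the title can be sketched as follows (the notation below is ours, not necessarily the paper's):

```latex
% A representation r is temporal-scaling (TS) equivariant if rescaling
% the time axis of an input sequence x by a factor \alpha > 0 rescales
% the time axis of the representation in the same way:
r\bigl[x(\alpha t)\bigr] = r[x](\alpha t), \qquad \alpha > 0
```

Intuitively, the network traverses the same neural trajectory at a different speed, rather than computing a new trajectory for each speed.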



Automatic Identification of Indicators of Compromise using Neural-Based Sequence Labelling

Zhou, Shengping, Long, Zi, Tan, Lianzhi, Guo, Hao

arXiv.org Artificial Intelligence

Indicators of Compromise (IOCs) are artifacts observed on a network or in an operating system that can indicate a computer intrusion and help detect cyber-attacks at an early stage. Thus, they play an important role in the field of cybersecurity. However, state-of-the-art IOC detection systems rely heavily on hand-crafted features designed with expert knowledge of cybersecurity, and require a large amount of supervised training corpora to train an IOC classifier. In this paper, we propose using a neural-based sequence labelling model to identify IOCs automatically from cybersecurity reports without expert knowledge of cybersecurity. Our work is the first to apply an end-to-end sequence labelling model to the task of IOC identification. By using an attention mechanism and several token spelling features, we find that the proposed model is capable of identifying low-frequency IOCs from the long sentences contained in cybersecurity reports. Experiments show that the proposed model outperforms other sequence labelling models, achieving an average F1-score of over 88%.
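The "token spelling features" mentioned in the abstract can be illustrated with a minimal sketch. The feature names and patterns below are assumptions for illustration, not the paper's exact feature set:

```python
import re

def spelling_features(token):
    """Illustrative surface-form features of the kind an IOC sequence
    labeller might attach to each token (assumed features, not the
    paper's exact design)."""
    return {
        "has_digit": any(c.isdigit() for c in token),
        "has_dot": "." in token,
        # Dotted-quad shape of an IPv4 address (shape only, no range check).
        "looks_like_ip": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", token)),
        # Hex string of MD5/SHA-1/SHA-256 length.
        "looks_like_hash": bool(re.fullmatch(r"[0-9a-fA-F]{32,64}", token)),
        # "Defanged" IOCs in reports often replace "." with "[.]".
        "is_defanged": "[.]" in token,
    }

tokens = "The malware contacts 192.168.1.10 over HTTP".split()
features = [spelling_features(t) for t in tokens]
```

Such features are concatenated with learned embeddings so that rare, never-before-seen IOC strings can still be recognized by their shape.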


Neural Models for Sequence Chunking

Zhai, Feifei (IBM Watson) | Potdar, Saloni (IBM Watson) | Xiang, Bing (IBM Watson) | Zhou, Bowen (IBM Watson)

AAAI Conferences

Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence. Most current deep neural network (DNN) based methods treat these tasks as a sequence labeling problem, in which a word, rather than a chunk, is the basic unit for labeling. The chunks are then inferred from the standard IOB (Inside-Outside-Beginning) labels. In this paper, we propose an alternative approach by investigating the use of DNNs for sequence chunking, and propose three neural models in which each chunk is treated as a complete unit for labeling. Experimental results show that the proposed neural sequence chunking models achieve state-of-the-art performance on both the text chunking and slot filling tasks.
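The IOB decoding step the abstract refers to can be sketched as follows; this is a generic decoder for the standard scheme, not the paper's models:

```python
def iob_to_chunks(tokens, tags):
    """Decode standard IOB (Inside-Outside-Beginning) tags into
    (label, chunk_text) pairs. An I- tag with no matching open chunk
    is treated leniently as outside."""
    chunks, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                chunks.append(current)
            current = (tag[2:], [tok])          # open a new chunk
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)              # continue the open chunk
        else:                                   # "O" or inconsistent I-
            if current:
                chunks.append(current)
            current = None
    if current:
        chunks.append(current)
    return [(label, " ".join(toks)) for label, toks in chunks]

tokens = ["Book", "a", "flight", "to", "New", "York"]
tags   = ["O", "O", "O", "O", "B-LOC", "I-LOC"]
# iob_to_chunks(tokens, tags) -> [("LOC", "New York")]
```

The paper's point is that this word-level tagging plus decoding can be replaced by models that score whole chunks directly.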